13 research outputs found

    Comparing software prediction techniques using simulation

    The need for accurate software prediction systems grows as software becomes larger and more complex. We believe that the underlying characteristics of the data set (size, number of features, type of distribution, etc.) influence the choice of prediction system. For this reason, we would like to control the characteristics of such data sets in order to systematically explore the relationship between accuracy, choice of prediction system, and data set characteristics. It would also be useful to have a large validation data set. Our solution is to simulate data, allowing both control and large (1,000-case) validation sets. The authors compare four prediction techniques: regression, rule induction, nearest neighbour (a form of case-based reasoning), and neural nets. The results suggest that there are significant differences depending upon the characteristics of the data set. Consequently, researchers should consider prediction context when evaluating competing prediction systems. We observed that the "messier" the data and the more complex the relationship with the dependent variable, the more variability in the results. In the more complex cases, we observed significantly different results depending upon the particular training set sampled from the underlying data set. However, our most important result is that it is more fruitful to ask which is the best prediction system in a particular context rather than which is the "best" prediction system overall.
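    The simulation-based comparison described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' actual experimental protocol: it simulates a data set with a known (here linear) structure, fits two of the four technique families by hand (ordinary least squares and 1-nearest-neighbour), and scores both on a large simulated validation set using MMRE, a common accuracy measure in software cost estimation. All function names and parameters are assumptions for the sketch.

```python
import random

random.seed(0)


def simulate(n):
    """Simulate one 'clean' data set: one feature, linear signal plus noise."""
    xs = [random.uniform(1, 10) for _ in range(n)]
    ys = [3.0 * x + 5.0 + random.gauss(0, 1) for x in xs]
    return xs, ys


train_x, train_y = simulate(50)      # small training sample
valid_x, valid_y = simulate(1000)    # large simulated validation set

# Ordinary least squares for y = a*x + b, fitted by hand.
n = len(train_x)
mx = sum(train_x) / n
my = sum(train_y) / n
a = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
    sum((x - mx) ** 2 for x in train_x)
b = my - a * mx


def predict_ols(x):
    return a * x + b


def predict_nn(x):
    """1-nearest-neighbour: reuse the outcome of the closest training case."""
    i = min(range(n), key=lambda j: abs(train_x[j] - x))
    return train_y[i]


def mmre(predict):
    """Mean magnitude of relative error over the validation set."""
    return sum(abs(predict(x) - y) / abs(y)
               for x, y in zip(valid_x, valid_y)) / len(valid_x)


print(f"OLS  MMRE: {mmre(predict_ols):.3f}")
print(f"1-NN MMRE: {mmre(predict_nn):.3f}")
```

    On this clean linear data set, regression should beat nearest neighbour; the paper's point is that as the simulated data become messier and the underlying relationship more complex, the ranking of techniques can change, which is why the data set characteristics must be controlled.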

    Expert Judgement in Cost Estimating: Modelling the Reasoning Process.

    Expert Judgement (EJ) is used extensively during the generation of cost estimates. Cost estimators have to make numerous assumptions and judgements about what they think a new product will cost. However, the use of EJ is often frowned upon, and is not well accepted or understood by non-cost estimators within a concurrent engineering environment. Computerised cost models have in many ways reduced the need for EJ, but they have by no means replaced it, nor can they. The cost estimates produced by both algorithmic and non-algorithmic cost models can be wildly inaccurate and, as the work of this paper highlights, require extensive use of judgement in order to produce a meaningful result. Very little research tackles the issues of capturing and integrating EJ and its rationale into the cost estimating process. Therefore, this paper presents a case for the wide use of EJ within cost estimating. EJ is examined in terms of the thought processes used when a judgement is made. This paper highlights that most judgements are based on referring to historical cost data and then adjusting up or down accordingly in order to predict the cost of a new project; this is often referred to as analogy. The reasoning processes of EJ are identified, and an inference structure has been developed which represents an abstraction of the reasoning steps used by an expert as they generate an estimate. This model has been validated through both the literature and interviews with cost estimating experts across various industry sectors. Furthermore, the key inferences of the experts are identified; these are the points at which many of the assumptions and expert judgements are made. The thesis of this paper is that by modelling the reasoning processes of EJ, it becomes possible to capture, structure, and integrate EJ and rationale into the cost estimating process as estimates are being generated. Consequently, the rationale capture will both improve the understanding of estimates throughout a product life cycle and improve management decisions based upon these cost estimates.

    An Empirical Evaluation of Two User Interfaces of an Interactive Program Verifier

    Theorem provers have highly complex interfaces, but there are few systematic studies of their usability and effectiveness. For interactive theorem provers in particular, the ability to quickly comprehend intermediate proof situations is of pivotal importance. In this paper we present the (to our knowledge) first empirical study that systematically compares the effectiveness of different user interfaces of an interactive theorem prover. We juxtapose two user interfaces of the interactive verifier KeY: the traditional one, which focuses on proof objects, and a more recent one that provides a view akin to an interactive debugger. We carefully designed a controlled experiment in which users were given various proof-understanding tasks that had to be solved with alternating interfaces. We provide statistical evidence that the conjectured higher effectiveness of the debugger-like interface is not just a hunch.

    Cognitive dimensions questionnaire applied to exploratory algorithm design

    In software engineering, the stage between problem realization and implementation of a solution is not well supported by technology. It is common to see work being carried out on paper or whiteboards. This paper documents a pilot study to identify some reasons why paper and whiteboards are useful tools in early exploratory design, and exposes some questions about where technology may fit in augmenting this stage of software engineering. The cognitive dimensions questionnaire was used to investigate the notations and devices used in exploratory algorithm design.

    Dimensions of Concern


    Cognitive Dimensions of Notations: Design Tools for Cognitive Technology

    The Cognitive Dimensions of Notations framework was created to help designers of notational systems and information artifacts evaluate their designs with respect to the impact those designs will have on their users. The framework emphasizes the design choices available to such designers, including characterization of the user's activity, and the inevitable trade-offs that will occur between potential design options. The framework has been under development for over 10 years and now has an active community of researchers devoted to it. This paper first introduces Cognitive Dimensions. It then summarizes current activity, especially the results of a one-day workshop devoted to Cognitive Dimensions in December 2000, and reviews the ways in which the framework applies to the field of Cognitive Technology.